
    Spatial information in large-scale neural recordings

    To record from a given neuron, a recording technology must be able to separate the activity of that neuron from the activity of its neighbors. Here, we develop a Fisher information-based framework to determine the conditions under which this is feasible for a given technology. This framework combines measurable point spread functions with measurable noise distributions to produce theoretical bounds on the precision with which a recording technology can localize neural activities. If there is sufficient information to uniquely localize neural activities, then a technology will, from an information-theoretic perspective, be able to record from these neurons. We (1) describe this framework, and (2) demonstrate its application in model experiments. This method generalizes to many recording devices that resolve objects in space and should be useful in the design of next-generation scalable neural recording systems.
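    A minimal numerical sketch of the kind of bound such a framework produces, assuming a 1-D Gaussian point spread function and independent Gaussian sensor noise; the function names, sensor spacing, and noise level below are illustrative choices, not taken from the paper. The Fisher information of the source position yields a Cramér–Rao lower bound on localization variance.

```python
import numpy as np

def gaussian_psf(sensor_pos, source_pos, amplitude=1.0, width=20.0):
    """Illustrative 1-D Gaussian point spread function (positions in microns)."""
    return amplitude * np.exp(-0.5 * ((sensor_pos - source_pos) / width) ** 2)

def localization_crb(sensor_pos, source_pos, noise_sd, amplitude=1.0, width=20.0):
    """Cramer-Rao bound on source-position variance for additive independent
    Gaussian noise: I(theta) = sum_i (d mu_i / d theta)^2 / sigma^2."""
    mu = gaussian_psf(sensor_pos, source_pos, amplitude, width)
    dmu_dtheta = mu * (sensor_pos - source_pos) / width ** 2   # analytic derivative
    fisher_info = np.sum(dmu_dtheta ** 2) / noise_sd ** 2
    return 1.0 / fisher_info                                   # variance lower bound

# Example: hypothetical sensors every 25 um, source at 10 um, noise SD 0.1
sensors = np.arange(-100.0, 101.0, 25.0)
var_bound = localization_crb(sensors, source_pos=10.0, noise_sd=0.1)
print(f"localization SD bound: {np.sqrt(var_bound):.2f} um")
```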

    Physical principles for scalable neural recording

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and wireless data transmission would only be viable, however, given major improvements to the power efficiency of microelectronic devices.
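    In the spirit of this scalability analysis, a rough back-of-envelope sketch of the data rates involved; the neuron count, sampling rate, bit depth, and firing rate below are illustrative assumptions rather than the paper's figures.

```python
# Rough data-rate estimate for whole-brain electrical recording.
# All numbers below are illustrative assumptions, not the paper's values.
n_neurons   = 75_000_000      # order-of-magnitude mouse brain neuron count
sample_rate = 10_000          # Hz, enough to resolve spike waveforms
bit_depth   = 10              # bits per sample

raw_rate_bits = n_neurons * sample_rate * bit_depth
print(f"raw waveform data rate: {raw_rate_bits / 8e12:.2f} TB/s")

# A spike-event readout (timestamps + IDs only) is far cheaper:
mean_rate      = 5            # Hz, assumed mean firing rate
bits_per_spike = 32           # timestamp plus neuron ID, assumed
event_rate_bits = n_neurons * mean_rate * bits_per_spike
print(f"spike-event data rate: {event_rate_bits / 8e9:.2f} GB/s")
```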

    Nucleotide-time alignment for molecular recorders

    Using a DNA polymerase to record intracellular calcium levels has been proposed as a novel neural recording technique, promising massive-scale, single-cell resolution monitoring of large portions of the brain. This technique relies on local storage of neural activity in strands of DNA, followed by offline analysis of that DNA. In simple implementations of this scheme, the time when each nucleotide was written cannot be determined directly by post-hoc DNA sequencing; the timing data must be estimated instead. Here, we use a Dynamic Time Warping-based algorithm to perform this estimation, exploiting correlations between neural activity and observed experimental variables to translate DNA-based signals into an estimate of neural activity over time. This algorithm improves the parallelizability of traditional Dynamic Time Warping, allowing several-fold increases in computation speed. The algorithm also provides a solution to several critical problems with the molecular recording paradigm: determining recording start times and coping with DNA polymerase pausing. The algorithm can generally locate DNA-based records to within <10% of a recording window, allowing for the estimation of unobserved incorporation times and latent neural tunings. We apply our technique to an in silico motor control neuroscience experiment, using the algorithm to estimate both timings of DNA-based data and the directional tuning of motor cortical cells during a center-out reaching task. We also use this algorithm to explore the impact of polymerase characteristics on system performance, determining the precision of a molecular recorder as a function of its kinetic and error-generating properties. We find useful ranges of properties for DNA polymerase-based recorders, providing guidance for future protein engineering attempts. This work demonstrates a useful general extension to dynamic alignment algorithms, as well as direct applications of that extension toward the development of molecular recorders, providing a necessary stepping stone for future biological work.
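    For orientation, a minimal implementation of classic dynamic time warping, the textbook algorithm underlying this approach; it is not the parallelized variant described in the abstract, and the signals used are synthetic.

```python
import numpy as np

def dtw_align(query, reference, dist=lambda a, b: abs(a - b)):
    """Classic O(N*M) dynamic time warping. Returns the total alignment cost
    and the warping path as (query_index, reference_index) pairs."""
    n, m = len(query), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(query[i - 1], reference[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Trace back the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Example: align a noisy, time-warped copy of a signal to the original.
t = np.linspace(0, 2 * np.pi, 50)
reference = np.sin(t)
query = np.sin(t ** 1.1) + 0.05 * np.random.randn(50)
total_cost, path = dtw_align(query, reference)
```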

    Effects of shuffled dataset on alignment accuracy.

    Evaluation of synthetic shuffled dataset on alignment performance. Preferred directions were determined using the best alignment to a set of 8 estimates of neural activity. True neural preferred directions were determined using a generalized linear model trained on x- and y-direction hand velocity. A) Histograms of algorithm-determined preferred directions of 4 selected neurons using the original dataset. Histograms represent relative frequencies over 100 simulated DNA-based records. Dashed line indicates true neural preferred direction. B) Histograms of algorithm-determined preferred directions of 4 selected neurons using a dataset consisting of random 2-second patches of the original dataset. Histograms represent relative frequencies over 100 simulated DNA-based records. Dashed line indicates true neural preferred direction. C) Average absolute error in estimating the preferred directions of 4 selected neurons using either the original or shuffled dataset. Error bars represent bootstrapped 95% confidence intervals over 100 trials.
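    As a rough illustration of the preferred-direction estimate described above, the sketch below fits spike counts to x- and y-velocity by ordinary least squares (a simple linear stand-in for the generalized linear model used here) and recovers the preferred direction from the two velocity coefficients; all data are synthetic.

```python
import numpy as np

def preferred_direction(spike_counts, vel_x, vel_y):
    """Estimate a cell's preferred direction from hand velocity.
    Fits counts ~ b0 + bx*vx + by*vy by least squares and returns
    atan2(by, bx) in radians."""
    X = np.column_stack([np.ones_like(vel_x), vel_x, vel_y])
    b0, bx, by = np.linalg.lstsq(X, spike_counts, rcond=None)[0]
    return np.arctan2(by, bx)

# Synthetic check: a cosine-tuned cell with a true preferred direction of 45 deg.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 1000)
vx, vy = np.cos(angles), np.sin(angles)
true_pd = np.pi / 4
rate = 10 + 8 * np.cos(angles - true_pd)        # cosine tuning curve
counts = rng.poisson(rate)
print(np.degrees(preferred_direction(counts, vx, vy)))  # approximately 45
```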

    Effect of DNAP parameters on alignment and tuning estimation.

    Examining alignment performance using simulated DNAPs with varying parameters. Bootstrapped 95% confidence intervals of displayed values are indicated by blue silhouettes. A,B) The mean and median timing RMSD of alignments for DNA-based records of increasing length. C) Error in slope estimation for DNA-based records of increasing length. D,E) The mean and median timing RMSD of alignments for DNAPs with decreasing nucleotide incorporation rates. F) Error in slope estimation for DNAPs with decreasing nucleotide incorporation rates. G,H) Mean and median timing RMSD of alignments for DNAPs with increasing sensitivity to [Ca2+]. I) Error in slope estimation for DNAPs with increasing sensitivity to [Ca2+]. J,K) The mean and median timing RMSD of alignments for DNAPs of increasing maximum error rate. L) Error in slope estimation for DNAPs with increasing maximum error rate.

    Overview of data generative model.

    A) Stochastic generation of T. The incorporation time of a nucleotide, τ_n, is defined as τ_{n-1} + U, where U is a random variable whose distribution describes the kinetics of the DNAP being used. B) Stochastic generation of errors. At each incorporation time τ_n, an error is generated with some probability. Errors in the nucleotide strand are represented by blue regions; correct incorporations are represented by orange regions.
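    A minimal simulation of this generative model, assuming exponentially distributed inter-incorporation intervals U and a constant per-nucleotide error probability (the full model lets the error probability vary, e.g. with [Ca2+]); all parameter values are illustrative.

```python
import numpy as np

def simulate_dna_record(n_nucleotides, mean_interval_s=1.0, error_prob=0.01, seed=0):
    """Simulate incorporation times and error positions for a DNAP-based
    recorder. Assumes U ~ Exponential(mean_interval_s) and a fixed
    per-nucleotide error probability, both simplifying assumptions."""
    rng = np.random.default_rng(seed)
    intervals = rng.exponential(mean_interval_s, size=n_nucleotides)  # draws of U
    incorporation_times = np.cumsum(intervals)                        # tau_n = tau_{n-1} + U
    is_error = rng.random(n_nucleotides) < error_prob                 # error flags per position
    return incorporation_times, is_error

times, errors = simulate_dna_record(1000)
print(f"record spans {times[-1]:.0f} s with {errors.sum()} errors")
```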

    High-resolution mapping of DNA polymerase fidelity using nucleotide imbalances and next-generation sequencing

    DNA polymerase fidelity is affected by both intrinsic properties and environmental conditions. Current strategies for measuring DNA polymerase error rate in vitro are constrained by low error subtype sensitivity, poor scalability, and lack of flexibility in the types of sequence contexts that can be tested. We have developed the Magnification via Nucleotide Imbalance Fidelity (MagNIFi) assay, a scalable next-generation sequencing assay that uses a biased deoxynucleotide pool to quantitatively shift error rates into a range where errors are frequent and hence measurement is robust, while still allowing for accurate mapping to error rates under typical conditions. This assay is compatible with a wide range of fidelity-modulating conditions, and enables high-throughput analysis of sequence context effects on base substitution and single nucleotide deletion fidelity using a built-in template library. We validate this assay by comparing it to previously established fidelity metrics, and use it to investigate neighboring sequence-mediated effects on fidelity for several DNA polymerases. Through these demonstrations, we establish the MagNIFi assay for robust, high-throughput analysis of DNA polymerase fidelity.
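    A back-of-envelope sketch of the mapping idea, under the simplifying assumption that misincorporation frequency scales linearly with the nucleotide imbalance factor; this is an illustration only, not the published MagNIFi calibration, and all numbers are illustrative.

```python
def estimate_balanced_error_rate(n_errors, n_bases_sequenced, imbalance_factor):
    """Map an error rate measured under a biased dNTP pool back to the rate
    expected under balanced conditions, assuming misincorporation frequency
    scales linearly with the imbalance factor (a simplification)."""
    observed_rate = n_errors / n_bases_sequenced
    return observed_rate / imbalance_factor

# Example: 4,500 substitutions observed in 1e7 sequenced bases at a 100-fold bias.
print(estimate_balanced_error_rate(4_500, 10_000_000, imbalance_factor=100))
# -> 4.5e-06 errors per base under balanced conditions (illustrative)
```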